1. Identity statement
Reference Type: Conference Paper (Conference Proceedings)
Site: sibgrapi.sid.inpe.br
Holder Code: ibi 8JMKD3MGPEW34M/46T9EHH
Identifier: 8JMKD3MGPBW4/35THUHP
Repository: sid.inpe.br/sibgrapi@80/2009/08.26.16.11
Last Update: 2009:08.26.16.11.51 (UTC) administrator
Metadata Repository: sid.inpe.br/sibgrapi@80/2009/08.26.16.11.52
Metadata Last Update: 2022:07.30.18.33.29 (UTC) administrator
DOI: 10.1109/SIBGRAPI.2009.17
Citation Key: LopesOlivAlmeAraú:2009:SpFrBa
Title: Spatio-Temporal Frames in a Bag-of-visual-features Approach for Human Actions Recognition
Format: Printed, On-line.
Year: 2009
Access Date: 2024, Apr. 29
Number of Files: 1
Size: 1083 KiB
2. Context
Author: 1 Lopes, Ana Paula Brandão
2 Oliveira, Rodrigo Silva
3 Almeida, Jussara Marques de
4 Araújo, Arnaldo de Albuquerque
Affiliation: 1 Federal University of Minas Gerais / State University of Santa Cruz
2 Federal University of Minas Gerais
3 Federal University of Minas Gerais
4 Federal University of Minas Gerais
Editor: Nonato, Luis Gustavo
Scharcanski, Jacob
e-Mail Address: paula@dcc.ufmg.br
Conference Name: Brazilian Symposium on Computer Graphics and Image Processing, 22 (SIBGRAPI)
Conference Location: Rio de Janeiro, RJ, Brazil
Date: 11-14 Oct. 2009
Publisher: IEEE Computer Society
Publisher City: Los Alamitos
Book Title: Proceedings
Tertiary Type: Full Paper
History (UTC): 2010-08-28 20:03:28 :: paula@dcc.ufmg.br -> administrator ::
2022-07-30 18:33:29 :: administrator -> :: 2009
3. Content and structure
Is the master or a copy?: is the master
Content Stage: completed
Transferable: 1
Version Type: finaldraft
Keywords: Human Actions; Bag-of-Visual-Features; Video classification
Abstract: The recognition of human actions from videos has several interesting and important applications, and a wide variety of approaches has been proposed for this task in different settings. Such approaches can be broadly categorized as model-based or model-free. Model-based approaches typically work only in very constrained settings, which has motivated a number of model-free approaches in recent years. Among them, those based on bag-of-visual-features (BoVF) have proven to be the most consistently successful, being used by several independent authors. For videos to be represented by BoVFs, though, an important issue arises: how to represent dynamic information. Most existing proposals consider the video as a spatio-temporal volume and then describe volumetric patches around 3D interest points. In this work, we propose to build a BoVF representation for videos by collecting 2D interest points directly. The basic idea is to gather such points not only from the traditional frames (xy planes), but also from planes along the time axis, which we call the spatio-temporal frames. Our assumption is that such features are able to capture dynamic information from the videos and are therefore well-suited to recognizing human actions, without the need for 3D extensions of the descriptors. In our experiments, this approach achieved state-of-the-art recognition rates on a well-known human actions database, even when compared to more sophisticated schemes.
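The slicing idea described in the abstract can be illustrated with a minimal sketch (this is not the authors' code; the array layout and function name are assumptions for illustration): a grayscale video is treated as a 3D volume of shape (T, H, W), and 2D slices are taken not only as the usual xy frames but also as xt and yt planes along the time axis, each of which can then be fed to any off-the-shelf 2D interest-point detector.

```python
import numpy as np

def spatio_temporal_frames(video):
    """Yield (plane_type, index, 2D slice) for all three slicing directions.

    `video` is a numpy array of shape (T, H, W): T frames of H x W pixels.
    """
    t, h, w = video.shape
    for i in range(t):          # traditional xy frames
        yield ("xy", i, video[i, :, :])
    for y in range(h):          # xt planes, one per image row
        yield ("xt", y, video[:, y, :])
    for x in range(w):          # yt planes, one per image column
        yield ("yt", x, video[:, :, x])

# Example: a tiny synthetic video of 8 frames, each 16x16 pixels
video = np.random.rand(8, 16, 16)
planes = list(spatio_temporal_frames(video))
# 8 xy frames + 16 xt planes + 16 yt planes = 40 slices in total
```

Each xt or yt plane records how one row or column of pixels evolves over time, which is how this representation captures motion with purely 2D descriptors.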
Arrangement 1: urlib.net > SDLA > Fonds > SIBGRAPI 2009 > Spatio-Temporal Frames in...
Arrangement 2: urlib.net > SDLA > Fonds > Full Index > Spatio-Temporal Frames in...
doc Directory Content: access
source Directory Content: there are no files
agreement Directory Content: there are no files
4. Conditions of access and use
data URL: http://urlib.net/ibi/8JMKD3MGPBW4/35THUHP
zipped data URL: http://urlib.net/zip/8JMKD3MGPBW4/35THUHP
Language: en
Target File: sibgrapi-actions-2009-FINAL-5-no-bookmarks.pdf
User Group: paula@dcc.ufmg.br
Visibility: shown
5. Allied materials
Mirror Repository: sid.inpe.br/banon/2001/03.30.15.38.24
Next Higher Units: 8JMKD3MGPEW34M/46SJQ2S
8JMKD3MGPEW34M/4742MCS
Citing Item List: sid.inpe.br/sibgrapi/2022/05.14.19.43 3
Host Collection: sid.inpe.br/banon/2001/03.30.15.38
6. Notes
Empty Fields: archivingpolicy archivist area callnumber contenttype copyholder copyright creatorhistory descriptionlevel dissemination documentstage edition electronicmailaddress group isbn issn label lineage mark nextedition notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder schedulinginformation secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url volume